Analyzing Visual Mappings of Traditional and Alternative Music Notation
In this paper, we postulate that combining the domains of information
visualization and music studies lays the groundwork for a more structured analysis
of the design space of music notation, enabling the creation of alternative
music notations that are tailored to different users and their tasks. Hence, we
discuss the instantiation of a design and visualization pipeline for music
notation that follows a structured approach, based on the fundamental concepts
of information and data visualization. This enables practitioners and
researchers of digital humanities and information visualization alike to
conceptualize, create, and analyze novel music notation methods. Based on the
analysis of relevant stakeholders and their usage of music notation as a means
of communication, we identify a set of relevant features typically encoded in
different annotations and encodings, as used by interpreters, performers, and
readers of music. We analyze the visual mappings of musical dimensions for
varying notation methods to highlight gaps and frequent usages of encodings,
visual channels, and Gestalt laws. This detailed analysis leads us to the
conclusion that such an under-researched area in information visualization
holds the potential for fundamental research. This paper discusses possible
research opportunities, open challenges, and arguments that can be pursued in
the process of analyzing, improving, or rethinking existing music notation
systems and techniques.
Comment: 5 pages including references, 3rd Workshop on Visualization for the
Digital Humanities, Vis4DH, IEEE Vis 201
explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
We propose a framework for interactive and explainable machine learning that
enables users to (1) understand machine learning models; (2) diagnose model
limitations using different explainable AI methods; as well as (3) refine and
optimize the models. Our framework combines an iterative XAI pipeline with
eight global monitoring and steering mechanisms, including quality monitoring,
provenance tracking, model comparison, and trust building. To operationalize
the framework, we present explAIner, a visual analytics system for interactive
and explainable machine learning that instantiates all phases of the suggested
pipeline within the commonly used TensorBoard environment. We performed a
user study with nine participants across different expertise levels to examine
their perception of our workflow and to collect suggestions to fill the gap
between our system and framework. The evaluation confirms that our tightly
integrated system leads to an informed machine learning process while
disclosing opportunities for further extensions.
Comment: 9 pages paper, 2 pages references, 5 pages supplementary material
(ancillary files)
Interactive Visual Explanation of Incremental Data Labeling
We present a visual analytics approach for the in-depth analysis and explanation of incremental machine learning processes that are based on data labeling. Our approach offers multiple perspectives to explain the process, i.e., data characteristics, label distribution, class characteristics, and classifier characteristics. Additionally, we introduce metrics from which we derive novel aggregated analytic views that enable the analysis of the process over time. We demonstrate the capabilities of our approach in a case study, showing how it improves the transparency of the iterative learning process.
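The aggregated analytic views mentioned above could, for instance, track how balanced the accumulated label distribution is across labeling iterations. The following sketch is a hypothetical illustration of that idea (the entropy metric and the toy labels are assumptions, not the paper's actual metrics):

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical labels accumulated after each incremental labeling iteration.
iterations = [
    ["cat", "cat", "dog"],
    ["cat", "cat", "dog", "dog", "bird"],
    ["cat", "cat", "dog", "dog", "bird", "bird", "bird"],
]

# Aggregated view over time: higher entropy = more balanced label distribution.
history = [round(label_entropy(step), 3) for step in iterations]
print(history)
```

Plotting such a per-iteration metric is one simple way to make the temporal evolution of a labeling process inspectable.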
RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback
To use reinforcement learning from human feedback (RLHF) in practical
applications, it is crucial to learn reward models from diverse sources of
human feedback and to consider human factors involved in providing feedback of
different types. However, the systematic study of learning from diverse types
of feedback is held back by limited standardized tooling available to
researchers. To bridge this gap, we propose RLHF-Blender, a configurable,
interactive interface for learning from human feedback. RLHF-Blender provides a
modular experimentation framework and implementation that enables researchers
to systematically investigate the properties and qualities of human feedback
for reward learning. The system facilitates the exploration of various feedback
types, including demonstrations, rankings, comparisons, and natural language
instructions, as well as studies considering the impact of human factors on
their effectiveness. We discuss a set of concrete research opportunities
enabled by RLHF-Blender. More information is available at
https://rlhfblender.info/.
Comment: 14 pages, 3 figures
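One of the feedback types named above, pairwise comparisons, is commonly turned into a reward model via a Bradley-Terry preference likelihood. The sketch below is a minimal, self-contained illustration of that general idea (a toy linear reward model and hand-made comparison data, not RLHF-Blender's actual implementation):

```python
import math

# Toy reward model: linear in a 2-d state feature vector (assumed for illustration).
def reward(w, x):
    return w[0] * x[0] + w[1] * x[1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Pairwise comparisons from a hypothetical annotator: (preferred, rejected).
comparisons = [
    ((1.0, 0.0), (0.0, 1.0)),
    ((0.9, 0.1), (0.2, 0.8)),
    ((0.8, 0.3), (0.1, 0.9)),
]

# Gradient ascent on the Bradley-Terry log-likelihood:
#   P(a preferred over b) = sigmoid(r(a) - r(b))
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    grad = [0.0, 0.0]
    for preferred, rejected in comparisons:
        p = sigmoid(reward(w, preferred) - reward(w, rejected))
        for i in range(2):
            grad[i] += (1.0 - p) * (preferred[i] - rejected[i])
    w = [w[i] + lr * grad[i] for i in range(2)]

# The learned reward should rank every preferred state above its rejected one.
print(all(reward(w, a) > reward(w, b) for a, b in comparisons))
```

Rankings and demonstrations can be reduced to the same pairwise form, which is why comparison-based reward learning is a common baseline in this setting.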
Risk the drift! Stretching disciplinary boundaries through critical collaborations between the humanities and visualization
In this paper, we discuss collaborations that can emerge between humanities and visualization researchers. Based on four case studies we illustrate different collaborative constellations within such cross-disciplinary projects that are influenced as much by the general project goals as by the expertise, disciplinary background and individual aims of the involved researchers. We found that such collaborations can introduce productive tensions that stretch the boundaries of visualization research and the involved humanities fields, often leaving team members "adrift" trying to make sense of findings that are the result of a mixture of different (sometimes competing) research questions, methodologies, and underlying assumptions. We discuss inherent challenges and productive synergies that these drifts can introduce. We argue that greater critical attention must be brought to the collaborative process itself in order to facilitate effective cross-disciplinary collaborations and enhance potential contributions and research impact for all involved disciplines. We introduce a number of guiding questions to facilitate critical awareness and reflection throughout the collaborative process, allowing for more transparency, productive communication, and equal participation within research teams.
Towards a Rigorous Evaluation of XAI Methods on Time Series
Explainable Artificial Intelligence (XAI) methods are typically deployed to
explain and debug black-box machine learning models. However, most proposed XAI
methods are black boxes themselves and designed for images. Thus, they rely on
visual interpretability to evaluate and prove explanations. In this work, we
apply XAI methods previously used in the image and text domains to time series.
We present a methodology to test and evaluate various XAI methods on time
series by introducing new verification techniques to incorporate the temporal
dimension. We further conduct preliminary experiments to assess the quality of
the explanations produced by selected XAI methods, applying various verification
methods on a range of datasets and inspecting the resulting quality metrics. Our
initial experiments demonstrate that SHAP works robustly across all models,
while others, such as DeepLIFT, LRP, and Saliency Maps, work better with
specific architectures.
Comment: 5 pages, 1 figure, 1 table, 1 page references - 2019 ICCV Workshop on
Interpreting and Explaining Visual Artificial Intelligence Models
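A common way to verify attributions on time series, in the spirit of the verification techniques described above, is to perturb the most-attributed time steps and compare the effect on the model output against perturbing random ones. The following is a minimal sketch with a toy linear model and oracle attributions (a hypothetical illustration, not the paper's exact procedure; real attributions would come from SHAP, LRP, etc.):

```python
import random

# Toy "model": weighted sum over a time series; later time steps matter more.
weights = [0.05 * t for t in range(10)]

def model(series):
    return sum(w * x for w, x in zip(weights, series))

# Oracle attribution (assumed for illustration): per-time-step contribution.
def attribution(series):
    return [w * x for w, x in zip(weights, series)]

random.seed(0)
series = [random.uniform(0.5, 1.0) for _ in range(10)]
base = model(series)

def perturb(series, idx):
    # Zero out the selected time steps.
    return [0.0 if t in idx else x for t, x in enumerate(series)]

# Verification: zeroing the k most-attributed steps should change the output
# at least as much as zeroing k random steps.
k = 3
attr = attribution(series)
top = set(sorted(range(10), key=lambda t: abs(attr[t]), reverse=True)[:k])
rand = set(random.sample(range(10), k))

drop_top = abs(base - model(perturb(series, top)))
drop_rand = abs(base - model(perturb(series, rand)))
print(drop_top >= drop_rand)
```

If a faithful attribution method fails this kind of check, that is evidence against its explanations on temporal data, independent of any visual inspection.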
CommAID: Visual Analytics for Communication Analysis through Interactive Dynamics Modeling
Communication consists of both meta-information as well as content.
Currently, the automated analysis of such data often focuses either on the
network aspects via social network analysis or on the content, utilizing
methods from text-mining. However, the first category of approaches does not
leverage the rich content information, while the latter ignores the
conversation environment and the temporal evolution, as evident in the
meta-information. In contrast to communication research, which stresses
the importance of a holistic approach, both aspects are rarely applied
simultaneously, and consequently, their combination has not yet received enough
attention in automated analysis systems. In this work, we aim to address this
challenge by discussing the difficulties and design decisions of such a path,
and by contributing CommAID, a blueprint for a holistic strategy for
communication analysis. It features an integrated visual analytics design to
analyze communication networks through dynamics modeling, semantic pattern
retrieval, and a user-adaptable and problem-specific machine learning-based
retrieval system. An interactive multi-level matrix-based visualization
facilitates a focused analysis of both network and content using inline visuals
supporting cross-checks and reducing context switches. We evaluate our approach
in both a case study and through formative evaluation with eight law
enforcement experts using a real-world communication corpus. Results show that
our solution surpasses existing techniques in terms of integration level and
applicability. With this contribution, we aim to pave the path for a more
holistic approach to communication analysis.
Comment: 12 pages, 7 figures, Computer Graphics Forum 2021 (pre-peer reviewed
version)